Lexington
The 45 planets most likely to host alien life, according to astronomers
'Project Hail Mary' may be fiction, but this list could still come in handy. An artist's impression of a theoretical planet orbiting a redder star, which could cause microbes and plants on the planet's surface to reflect very different colors from Earth's green forests. Life on Earth is a precious thing, especially given what astronomers know about the visible universe. Although researchers have so far identified over 6,000 exoplanets beyond our solar system, only a handful of them may be suitable for human visitors.
- North America > United States > Kentucky > Fayette County > Lexington (0.05)
- Europe > Spain (0.05)
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- (4 more...)
- Oceania > Australia > New South Wales > Sydney (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (20 more...)
100 mystery sounds under review for signs of extraterrestrial life
Over 11 years, citizen scientists collected billions of data signals for the SETI@home project. After reviewing almost 30 years of signals, University of California, Berkeley researchers have identified 100 mysterious deep-space radio blips they want to review for signs of extraterrestrial life. And they couldn't have done it without 11 years of volunteer work from millions of PC owners around the world. Even with today's advanced computers, the world's most complex data problems can't be solved by a single machine.
- North America > United States > California > Alameda County > Berkeley (0.25)
- North America > Puerto Rico > Arecibo > Arecibo (0.07)
- North America > United States > Massachusetts (0.05)
- (4 more...)
Bridging the Clinical Expertise Gap: Development of a Web-Based Platform for Accessible Time Series Forecasting and Analysis
Mullen, Aaron D., Harris, Daniel R., Slavova, Svetla, Bumgardner, V. K. Cody
Time series forecasting has applications across domains and industries, especially in healthcare, but the technical expertise required to analyze data, build models, and interpret results can be a barrier to using these techniques. This article presents a web platform that makes analyzing and plotting data, training forecasting models, and interpreting and viewing results accessible to researchers and clinicians. Users can upload data and generate plots that showcase their variables and the relationships between them. The platform supports multiple forecasting models and training techniques, which are highly customizable according to the user's needs. Additionally, a large language model can generate recommendations and explanations that help the user choose appropriate parameters for their data and understand the results for each model. The goal is to integrate this platform into learning health systems for continuous data collection and inference from clinical pipelines.
- Health & Medicine > Therapeutic Area > Immunology (0.47)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.46)
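The workflow the platform describes (upload a series, fit a model, read back a forecast) can be illustrated with a deliberately simple baseline. The sketch below is not the platform's API; the function name and the seasonal-naive rule are assumptions standing in for the kind of baseline model such a tool might offer alongside learned ones:

```python
def naive_seasonal_forecast(series, season=7, horizon=14):
    """Forecast by repeating the last observed season.

    A common sanity-check baseline for time series tools: the next
    `horizon` values are read off cyclically from the final `season`
    observations of the input series.
    """
    if len(series) < season:
        raise ValueError("series shorter than one season")
    last_season = series[-season:]
    return [last_season[i % season] for i in range(horizon)]
```

A reasonable platform would report such a baseline next to each trained model, so users can see whether the extra complexity actually buys accuracy.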
Understanding Robustness of Model Editing in Code LLMs: An Empirical Study
Chhetri, Vinaik, Siddique, A. B, Farooq, Umar
Large language models (LLMs) are increasingly used in software development. However, while LLMs remain static after pretraining, programming languages and APIs continue to evolve, leading to the generation of deprecated or incompatible code that undermines reliability. Retraining LLMs from scratch to reflect such changes is computationally expensive, making model editing a promising lightweight alternative that updates only a small subset of parameters. Despite its potential, it remains unclear whether model editing yields genuine syntactic and semantic adaptations or merely superficial fixes. In this work, we present a systematic study of five state-of-the-art model editing methods: Constrained Fine-Tuning (FT), GRACE, MEMIT, PMET, and ROME. We apply these methods to three leading open-source code LLMs, CodeLlama, CodeQwen1.5, and DeepSeek-Coder, under controlled API deprecation scenarios. Our evaluation covers both instant and sequential editing settings, using three disjoint evaluation sets designed to assess reliability, generalization, and specificity. We measure model correctness at three levels: successful compilation, partial test case pass, and full test pass. Our findings show that instant edits consistently degrade model performance, with syntactic validity dropping by up to 86 percentage points and functional correctness declining by 45 points even in the best-performing setting. Sequential edits further amplify this degradation, and in some cases, model performance collapses entirely. Across all models, most passing generations relied on workarounds rather than correctly adopting the intended changes, while faulty adoptions that result in test failures or compilation errors were significantly more frequent. Correct adoptions, where the model correctly integrates the intended change, occurred in only about 6% of cases.
- North America > United States > Louisiana > East Baton Rouge Parish > Baton Rouge (0.14)
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- North America > Dominican Republic (0.04)
- (4 more...)
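The three-level correctness measure the abstract describes (successful compilation, partial test case pass, full test pass) can be sketched as a small grading helper. This is an illustrative reconstruction, not the authors' evaluation harness; the function name and return labels are assumptions:

```python
def grade_generation(compiled: bool, tests_passed: int, tests_total: int) -> str:
    """Classify a generated program at three levels of correctness:

    'fail'    - did not compile (no syntactic validity)
    'compile' - compiled but passed no tests
    'partial' - compiled and passed some, but not all, tests
    'full'    - compiled and passed every test (functional correctness)
    """
    if not compiled:
        return "fail"
    if tests_total > 0 and tests_passed == tests_total:
        return "full"
    if tests_passed > 0:
        return "partial"
    return "compile"
```

Aggregating these labels over an evaluation set gives exactly the kind of metric the study reports, e.g. the share of generations reaching 'compile' (syntactic validity) versus 'full' (functional correctness).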
Spatio-temporal Multivariate Time Series Forecast with Chosen Variables
Liu, Zibo, Jiang, Zhe, Xu, Zelin, Xiao, Tingsong, Zhang, Yupu, Xiao, Zhengkun, Wang, Haibo, Chen, Shigang
Spatio-Temporal Multivariate time series Forecast (STMF) uses the time series of $n$ spatially distributed variables in a period of recent past to forecast their values in a period of near future. It has important applications in spatio-temporal sensing forecasts such as road traffic prediction and air pollution prediction. Recent papers have addressed the practical problem of missing variables in the model input, which arises in sensing applications where the number $m$ of sensors is far less than the number $n$ of locations to be monitored, due to budget constraints. We observe that the state of the art assumes that the $m$ variables (i.e., locations with sensors) in the model input are pre-determined, and that the important problem of how to choose the $m$ variables in the input has never been studied. This paper fills that gap by studying a new problem, STMF with chosen variables, which optimally selects $m$-out-of-$n$ variables for the model input in order to maximize the forecast accuracy. We propose a unified framework that jointly performs variable selection and model optimization for both forecast accuracy and model efficiency. It consists of three novel technical components: (1) masked variable-parameter pruning, which progressively prunes less informative variables and attention parameters through quantile-based masking; (2) prioritized variable-parameter replay, which replays low-loss past samples to preserve learned knowledge for model stability; (3) a dynamic extrapolation mechanism, which propagates information from the variables selected for the input to all other variables via learnable spatial embeddings and adjacency information. Experiments on five real-world datasets show that our work significantly outperforms the state-of-the-art baselines in both accuracy and efficiency, demonstrating the effectiveness of joint variable selection and model optimization.
- North America > United States > Florida > Alachua County > Gainesville (0.15)
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- (2 more...)
- Transportation (0.47)
- Information Technology (0.46)
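As a rough illustration of the quantile-based masking idea behind component (1), the sketch below progressively prunes low-importance variables against a tightening quantile threshold. The function names, importance scores, and schedule values are all hypothetical; the paper's actual criterion operates jointly on variables and attention parameters during training:

```python
def quantile_threshold(scores, q):
    """Linearly interpolated q-quantile of a list of scores."""
    s = sorted(scores)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def progressive_prune(scores, schedule=(0.1, 0.3, 0.5)):
    """Progressively mask out variables whose importance score falls
    below an increasingly strict quantile threshold.

    Returns a boolean keep-mask over the variables; each round of the
    schedule can only remove variables, never restore them.
    """
    mask = [True] * len(scores)
    for q in schedule:
        t = quantile_threshold(scores, q)
        mask = [kept and (s > t) for kept, s in zip(mask, scores)]
    return mask
```

In the paper's setting the surviving variables correspond to the $m$ sensor locations actually kept in the model input.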
Poisson Flow Consistency Training
Zhang, Anthony, Gokmen, Mahmut, Hein, Dennis, Ge, Rongjun, Xia, Wenjun, Wang, Ge, Chen, Jin
The Poisson Flow Consistency Model (PFCM) is a consistency-style model based on the robust Poisson Flow Generative Model++ (PFGM++), which has achieved success in unconditional image generation and CT image denoising. Yet the PFCM can only be trained via distillation, which limits its potential in many data modalities. The objective of this research was to create a method, called Poisson Flow Consistency Training (PFCT), for training the PFCM in isolation. The perturbation kernel was leveraged to remove the need for a pretrained PFGM++, and a sinusoidal discretization schedule and a Beta noise distribution were introduced to facilitate adaptability and improve sample quality. The model was applied to the task of low-dose computed tomography image denoising, where it improved the low-dose images in terms of LPIPS and SSIM and displayed denoising effectiveness similar to models such as the Consistency Model. Its effectiveness in denoising CT images establishes PFCT as a valid method for training the PFCM, with results competitive with other generative models. Further study is needed on the precise optimization of PFCT and on its applicability to other generative modeling tasks. The PFCT framework creates more flexibility in how a PFCM can be created and can be applied across the field of generative modeling.
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- North America > United States > North Carolina > Durham County > Durham (0.04)
- North America > United States > Alabama > Jefferson County > Birmingham (0.04)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (0.95)
- Health & Medicine > Nuclear Medicine (0.94)
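The two ingredients PFCT introduces, a sinusoidal discretization schedule and a Beta noise distribution, could look roughly like the sketch below. This is a guess at the general shape only: the range endpoints, the Beta shape parameters, and the direction of the sinusoidal warp are assumptions for illustration, not the paper's definitions:

```python
import math
import random

def sinusoidal_discretization(n, t_min=0.002, t_max=80.0):
    """Map uniform indices through a quarter-sine so that the
    discretization points cluster toward one end of [t_min, t_max],
    instead of being spaced uniformly."""
    return [t_min + (t_max - t_min) * math.sin(0.5 * math.pi * i / (n - 1))
            for i in range(n)]

def sample_beta_noise_level(alpha=2.0, beta=3.0, t_min=0.002, t_max=80.0):
    """Draw a training noise level from a Beta distribution rescaled to
    the schedule's range, so training emphasis is non-uniform in t."""
    u = random.betavariate(alpha, beta)
    return t_min + (t_max - t_min) * u
```

The point of both pieces is the same: where a plain consistency-training recipe spaces noise levels uniformly, these let the training concentrate effort where it helps sample quality most.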
A Community-driven vision for a new Knowledge Resource for AI
Chaudhri, Vinay K, Baru, Chaitan, Bennett, Brandon, Bhatt, Mehul, Cassel, Darion, Cohn, Anthony G, Dechter, Rina, Erdem, Esra, Ferrucci, Dave, Forbus, Ken, Gelfond, Gregory, Genesereth, Michael, Gordon, Andrew S., Grosof, Benjamin, Gupta, Gopal, Hendler, Jim, Israni, Sharat, Josephson, Tyler R., Kyllonen, Patrick, Lierler, Yuliya, Lifschitz, Vladimir, McFate, Clifton, McGinty, Hande K., Morgenstern, Leora, Oltramari, Alessandro, Paritosh, Praveen, Roth, Dan, Shepard, Blake, Shimzu, Cogan, Vrandečić, Denny, Whiting, Mark, Witbrock, Michael
The Cyc project, started in 1984, created the first large-scale database of commonsense knowledge. The initiative continues to this day with its aim to provide a comprehensive ontology and knowledge base of commonsense knowledge to enable human-like reasoning for AI systems. In the concluding paragraph of his 1995 Communications of the Association for Computing Machinery (CACM) article "A Large-Scale Investment in Knowledge Infrastructure" [52], Cyc's founder Douglas B. Lenat wrote: "Is Cyc necessary? How far would a user get with something simpler than Cyc but that lacks everyday commonsense knowledge? Nobody knows; the question will be settled empirically. Our guess is most of these applications will eventually tap the synergy in a suite of sources (including neural nets and decision theory), one of which will be Cyc." Although 30 years have passed since that article was written, the AI research community has not conclusively settled [10] the question "How far would a user get with something simpler than Cyc but that lacks everyday commonsense knowledge?" However, it is clear that significant strides have been made in addressing many of the tasks that were among the original Cyc use cases, including information retrieval, semi-automatically linking multiple heterogeneous external information sources, spelling and grammar correction, machine translation, natural language understanding, and speech understanding.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Switzerland (0.05)
- (14 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Health & Medicine (1.00)
- Education > Educational Setting (0.93)
- Leisure & Entertainment (0.93)
- Information Technology > Artificial Intelligence > Systems & Languages > Problem-Independent Architectures (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Ontologies (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Commonsense Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)
FOSSIL: Regret-Minimizing Curriculum Learning for Metadata-Free and Low-Data Mpox Diagnosis
Han, Sahng-Min, Kim, Minjae, Cha, Jinho, Choe, Se-woon, Cha, Eunchan Daniel, Choi, Jungwon, Jung, Kyudong
Deep learning in small and imbalanced biomedical datasets remains fundamentally constrained by unstable optimization and poor generalization. We present the first biomedical implementation of FOSSIL (Flexible Optimization via Sample-Sensitive Importance Learning), a regret-minimizing weighting framework that adaptively balances training emphasis according to sample difficulty. Using softmax-based uncertainty as a continuous measure of difficulty, we construct a four-stage curriculum (Easy-Very Hard) and integrate FOSSIL into both convolutional and transformer-based architectures for Mpox skin lesion diagnosis. Across all settings, FOSSIL substantially improves discrimination (AUC = 0.9573), calibration (ECE = 0.053), and robustness under real-world perturbations, outperforming conventional baselines without metadata, manual curation, or synthetic augmentation. The results position FOSSIL as a generalizable, data-efficient, and interpretable framework for difficulty-aware learning in medical imaging under data scarcity.
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- Asia > South Korea (0.04)
- Asia > Bangladesh (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
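The difficulty measure FOSSIL builds on, softmax-based uncertainty, and the four-stage Easy-to-Very-Hard bucketing can be sketched as follows. The stage boundaries and labels here are illustrative assumptions, not the paper's values, and the regret-minimizing reweighting itself is omitted:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def difficulty(logits):
    """Continuous difficulty score in [0, 1): one minus the model's
    top softmax probability, so confident samples score near 0."""
    return 1.0 - max(softmax(logits))

def curriculum_stage(score, bounds=(0.25, 0.5, 0.75)):
    """Bucket a difficulty score into a four-stage curriculum."""
    labels = ("easy", "medium", "hard", "very_hard")
    for bound, label in zip(bounds, labels):
        if score <= bound:
            return label
    return labels[-1]
```

A curriculum built this way needs no metadata or manual curation, which is the property the abstract emphasizes: difficulty is read directly off the model's own predictive uncertainty.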